Search results for "Multivariate random variable"
Showing 10 of 14 documents
Approximations in Statistics from a Decision-Theoretical Viewpoint
1987
The approximation of the probability density p(·) of a random vector x ∈ X by another (possibly more convenient) probability density q(·) belonging to a certain class Q is analyzed as a decision problem: the action space is the class Q of available approximations, the relevant uncertain event is the actual value of the vector x, and the utility function is a proper scoring rule. The logarithmic divergence is shown to play a special role within this approach. The argument lies entirely within a Bayesian framework.
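A minimal Python sketch of the idea in this abstract, for discrete distributions: score each candidate q in a class Q by its logarithmic (Kullback–Leibler) divergence from p and pick the minimizer. The function names and example distributions are illustrative, not the paper's.

```python
import math

def log_divergence(p, q):
    """Logarithmic (Kullback-Leibler) divergence D(p || q) for discrete
    distributions given as lists of probabilities over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def best_approximation(p, candidates):
    """Pick the density q in the candidate class Q minimizing D(p || q)."""
    return min(candidates, key=lambda q: log_divergence(p, q))

# Toy example: approximate p within a small class Q of candidate densities.
p = [0.5, 0.3, 0.2]
Q = [[1/3, 1/3, 1/3], [0.5, 0.25, 0.25], [0.6, 0.3, 0.1]]
q_star = best_approximation(p, Q)
```

Minimizing the logarithmic divergence is what maximizing expected logarithmic score amounts to, which is why that divergence plays the special role the abstract mentions.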
The simplex dispersion ordering and its application to the evaluation of human corneal endothelia
2009
A multivariate dispersion ordering based on random simplices is proposed in this paper. Given an R^d-valued random vector, we consider two random simplices determined by the convex hulls of two independent random samples of size d+1 from the vector. By stochastically comparing the Hausdorff distances between such simplices, a multivariate dispersion ordering is introduced. The main properties of the new ordering are studied. Relationships with other dispersion orderings are considered, with emphasis on the univariate version. Some statistical tests for the new order are proposed. An application of the ordering to the clinical evaluation of human corneal endothelia is provided. Di…
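The two ingredients of this construction can be sketched in a few lines of Python: draw d+1 sample points as the vertices of a random simplex, and compare simplices by Hausdorff distance. This sketch works with finite vertex sets only, which is a rough proxy for the distance between the convex hulls; the function names are illustrative, not the paper's R implementation.

```python
import math
import random

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^d
    (used here on simplex vertices as a rough proxy for the
    distance between the full convex hulls)."""
    def directed(U, V):
        return max(min(math.dist(u, v) for v in V) for u in U)
    return max(directed(A, B), directed(B, A))

def random_simplex(sample, d=2):
    """Vertices of a random simplex: d+1 points drawn without replacement."""
    return random.sample(sample, d + 1)

# Toy 2-D usage: two independent simplices from one sample of the vector.
sample = [(random.random(), random.random()) for _ in range(40)]
s1, s2 = random_simplex(sample[:20]), random_simplex(sample[20:])
dist = hausdorff(s1, s2)
```

The dispersion ordering itself then compares the *distributions* of such distances across many replications, which this snippet does not attempt.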
R Code for Hausdorff and Simplex Dispersion Orderings in the 2D Case
2010
This paper presents a software implementation in R of the Hausdorff and simplex dispersion orderings. A copy can be downloaded from http://www.uv.es/~ayala/software/fun-disp.R . The paper provides some examples using the function exactHausdorff for the Hausdorff dispersion ordering and the function simplex for the simplex dispersion orderings. Some auxiliary functions are also described.
On a representation theorem for finitely exchangeable random vectors
2016
A random vector $X=(X_1,\ldots,X_n)$ with the $X_i$ taking values in an arbitrary measurable space $(S, \mathscr{S})$ is exchangeable if its law is the same as that of $(X_{\sigma(1)}, \ldots, X_{\sigma(n)})$ for any permutation $\sigma$. We give an alternative and shorter proof of the representation result (Jaynes \cite{Jay86} and Kerns and Sz\'ekely \cite{KS06}) stating that the law of $X$ is a mixture of product probability measures with respect to a signed mixing measure. The result is "finitistic" in nature meaning that it is a matter of linear algebra for finite $S$. The passing from finite $S$ to an arbitrary one may pose some measure-theoretic difficulties which are avoided by our p…
Embedding Quantum into Classical: Contextualization vs Conditionalization
2014
We compare two approaches to embedding joint distributions of random variables recorded under different conditions (such as spins of entangled particles for different settings) into the framework of classical, Kolmogorovian probability theory. In the contextualization approach each random variable is "automatically" labeled by all conditions under which it is recorded, and the random variables across a set of mutually exclusive conditions are probabilistically coupled (i.e., a joint distribution is imposed upon them). Analysis of all possible probabilistic couplings for a given set of random variables allows one to characterize various relations between their separate distributions (such as Bell-type ine…
Information Functionals and the Notion of (Un)Certainty: Random Matrix Theory - Inspired Case
2007
Information functionals allow one to quantify the degree of randomness of a given probability distribution, either absolutely (through min/max entropy principles) or relative to a prescribed reference distribution. Our primary aim is to analyze the "minimum information" assumption, a classic concept (R. Balian, 1968) in random matrix theory. We put special emphasis on generic level (eigenvalue) spacing distributions and the degree of their randomness or, alternatively, their information/organization deficit.
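As a concrete instance of the information functionals this abstract refers to, a short Python sketch of Shannon entropy: the uniform distribution maximizes it (the "minimum information" case), while a peaked distribution has a large information deficit relative to uniform. This is a generic illustration, not the paper's random-matrix calculation.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i, a standard information
    functional quantifying the randomness of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Uniform = maximum entropy ("minimum information"); peaked = low entropy.
uniform = [0.25] * 4
peaked = [0.97, 0.01, 0.01, 0.01]
deficit = shannon_entropy(uniform) - shannon_entropy(peaked)
```

The "information/organization deficit" of a distribution can then be read as the gap between its entropy and the maximal (uniform) entropy on the same support.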
Random Logistic Maps II. The Critical Case
2003
Let $(X_n)_{n\ge 0}$ be a Markov chain with state space $S=[0,1]$ generated by the iteration of i.i.d. random logistic maps, i.e., $X_{n+1}=C_{n+1}X_n(1-X_n)$, $n\ge 0$, where $(C_n)_{n\ge 1}$ are i.i.d. random variables with values in $[0,4]$ and independent of $X_0$. In the critical case, i.e., when $E(\log C_1)=0$, Athreya and Dai (2) have shown that $X_n\to 0$ in probability. In this paper it is shown that if $P(C_1=1)<1$ and $E(\log C_1)=0$ then (i) $X_n$ does not go to zero with probability one (w.p.1) and in fact, there exist $0<\beta<1$ and a countable set $\Delta\subset(0,1)$ such that for all $x\in A:=(0,1)\setminus\Delta$, $P_x(X_n\ge\beta \text{ for infinitely many } n\ge 1)=1$, where $P_x$ stands for the probability distribution of $(X_n)_{n\ge 0}$ with $X_0=x$ w.p.1. $A$ is a closed set for $(X_n)_{n\ge 0}$…
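A small Python simulation of the critical case described above. Taking C_n uniform on {1/2, 2} gives E(log C_1) = (log(1/2) + log 2)/2 = 0 and P(C_1 = 1) = 0 < 1, matching the paper's hypotheses; the specific law of C_1 is an illustrative choice, not the paper's.

```python
import random

def simulate_critical(x0, n_steps, rng):
    """Simulate X_{n+1} = C_{n+1} X_n (1 - X_n) with C_n i.i.d. uniform
    on {1/2, 2}, so that E(log C_1) = 0 (the critical case)."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        c = rng.choice([0.5, 2.0])
        x = c * x * (1.0 - x)
        path.append(x)
    return path

# One trajectory started inside (0, 1); the chain stays in [0, 1].
path = simulate_critical(0.3, 1000, random.Random(0))
```

A single trajectory cannot, of course, distinguish convergence in probability from almost-sure convergence, which is exactly the subtlety the paper resolves.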
Stochastic order characterization of uniform integrability and tightness
2013
We show that a family of random variables is uniformly integrable if and only if it is stochastically bounded in the increasing convex order by an integrable random variable. This result is complemented by proving analogous statements for the strong stochastic order and for power-integrable dominating random variables. In particular, we show that whenever a family of random variables is stochastically bounded by a p-integrable random variable for some p>1, there is no distinction between the strong order and the increasing convex order. These results also yield new characterizations of relative compactness in Wasserstein and Prohorov metrics.
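The increasing convex order used above can be characterized through stop-loss transforms: X ≤_icx Y iff E[(X−t)₊] ≤ E[(Y−t)₊] for all t. A hedged empirical sketch in Python (a finite-sample check on a grid of thresholds, not a rigorous test):

```python
def stop_loss(sample, t):
    """Empirical stop-loss transform E[(X - t)_+], which characterizes
    the increasing convex order."""
    return sum(max(x - t, 0.0) for x in sample) / len(sample)

def leq_icx(sample_x, sample_y, thresholds):
    """Empirical check of X <=_icx Y on a grid of thresholds
    (illustrative only; a small tolerance absorbs rounding)."""
    return all(stop_loss(sample_x, t) <= stop_loss(sample_y, t) + 1e-12
               for t in thresholds)

# A constant is dominated in <=_icx by a mean-preserving spread of itself.
grid = [t * 0.25 for t in range(-4, 12)]
dominated = leq_icx([1, 1, 1, 1], [0, 2, 0, 2], grid)
```

In the paper's characterization, uniform integrability of a family corresponds to every member being ≤_icx some fixed integrable dominating variable.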
Sign test of independence between two random vectors
2003
A new affine invariant extension of the quadrant test statistic of Blomqvist (Ann. Math. Statist. 21 (1950) 593) based on spatial signs is proposed for testing the hypothesis of independence. In the elliptic case, the new test statistic is asymptotically equivalent to the interdirection test by Gieser and Randles (J. Amer. Statist. Assoc. 92 (1997) 561) but is easier to compute in practice. Limiting Pitman efficiencies and simulations are used to compare the test to the classical Wilks' test.
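The spatial signs underlying this test are simply observations projected to the unit sphere. A Python sketch of that ingredient, together with a toy quadrant-type statistic averaging outer products of paired spatial signs; this is an illustration of the idea, not the paper's affine-invariant statistic.

```python
import math

def spatial_sign(v):
    """Spatial sign S(v) = v / ||v||, with the zero vector mapped to zero."""
    norm = math.hypot(*v)
    if norm == 0:
        return tuple(0.0 for _ in v)
    return tuple(vi / norm for vi in v)

def sign_covariance(xs, ys):
    """Average outer product of the spatial signs of paired observations;
    under independence its entries should be near zero (toy version)."""
    n, d, e = len(xs), len(xs[0]), len(ys[0])
    m = [[0.0] * e for _ in range(d)]
    for x, y in zip(xs, ys):
        sx, sy = spatial_sign(x), spatial_sign(y)
        for i in range(d):
            for j in range(e):
                m[i][j] += sx[i] * sy[j] / n
    return m

# Toy usage on two paired 2-D samples.
m = sign_covariance([(1.0, 0.0), (0.0, 1.0)], [(1.0, 0.0), (0.0, 1.0)])
```

The paper's statistic additionally standardizes the data to achieve affine invariance, which this sketch omits.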
Central Limit Theorem for Linear Eigenvalue Statistics for a Tensor Product Version of Sample Covariance Matrices
2017
For $k,m,n\in\mathbb{N}$, we consider $n^k\times n^k$ random matrices of the form $$\mathcal{M}_{n,m,k}(\mathbf{y})=\sum_{\alpha=1}^m\tau_\alpha Y_\alpha Y_\alpha^T,\quad Y_\alpha=\mathbf{y}_\alpha^{(1)}\otimes\cdots\otimes\mathbf{y}_\alpha^{(k)},$$ where $\tau_\alpha$, $\alpha\in[m]$, are real numbers and $\mathbf{y}_\alpha^{(j)}$, $\alpha\in[m]$, $j\in[k]$, are i.i.d. copies of a normalized isotropic random vector $\mathbf{y}\in\mathbb{R}^n$. For every fixed $k\ge 1$, if the Normalized Counting Measures of $\{\tau_\alpha\}_\alpha$ converge weakly as $m,n\rightarrow\infty$…
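The matrix in this abstract can be assembled directly from its definition: tensor (Kronecker) products of the k factor vectors give each Y_α, and the matrix is the τ-weighted sum of the rank-one terms Y_α Y_αᵀ. A small dependency-free Python sketch (function names are illustrative):

```python
def kron(u, v):
    """Kronecker (tensor) product of two vectors given as lists."""
    return [ui * vj for ui in u for vj in v]

def tensor_sample_covariance(taus, ys):
    """Build M = sum_a tau_a Y_a Y_a^T, where Y_a is the tensor product
    of the k vectors in ys[a] (each of length n, so M is n^k x n^k)."""
    Y = []
    for vecs in ys:
        y = vecs[0]
        for v in vecs[1:]:
            y = kron(y, v)
        Y.append(y)
    N = len(Y[0])
    M = [[0.0] * N for _ in range(N)]
    for tau, y in zip(taus, Y):
        for i in range(N):
            for j in range(N):
                M[i][j] += tau * y[i] * y[j]
    return M

# Tiny case: k = 2, n = 2, m = 1, so M is 4 x 4 and rank one.
M = tensor_sample_covariance([1.0], [[[1.0, 0.0], [1.0, 0.0]]])
```

The paper's results concern the fluctuations of linear eigenvalue statistics of such matrices in the limit m, n → ∞, which this construction only sets the stage for.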